Artificial Intelligence-driven machines can be fooled, warn IISc researchers
Machine-learning and artificial-intelligence algorithms used in sophisticated applications such as autonomous cars are not foolproof and can be manipulated by introducing errors, Indian Institute of Science (IISc) researchers have warned.

Machine-learning and AI software is trained on an initial set of data, such as images of cats, and learns to identify feline images as more such data are fed in. A common example is Google returning better results as more people search for the same information. AI applications are becoming mainstream in areas such as healthcare, payments processing, drones deployed to monitor crowds, and facial recognition in offices and airports.

"If your data input is not clear and vetted, the AI machine could throw up surprising results and that could end up being hazardous. In autonomous driving, the AI engine should be trained properly on all road signs. If the input sign is different, then it could change the course of the vehicle, leading to a catastrophe," R Venkatesha Babu, Associate Professor at IISc's Department of Computational Sciences, told ET.

"The system also needs to have enough cyber-security measures to prevent hackers from intruding and altering inputs," he said.
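The kind of manipulation the researchers describe is known in the literature as an adversarial example: a small, deliberately chosen change to the input that flips a model's prediction. A minimal sketch of the idea, using a toy logistic classifier with hand-picked weights (all values here are illustrative, not from any real system), shows how a gradient-guided nudge to the input can reverse the output:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Fast-gradient-sign-style perturbation of the input.
    For logistic loss, the gradient of the loss w.r.t. the input
    is (p - y) * w; nudging each feature by eps in the direction of
    that gradient's sign increases the loss, pushing the prediction
    away from the true label y."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

# Toy classifier and clean input (illustrative values only)
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1

x_adv = fgsm_perturb(w, b, x, y, eps=0.3)
print(predict(w, b, x))      # above 0.5: classified as positive
print(predict(w, b, x_adv))  # below 0.5: the prediction flips
```

The same principle scales up: in a deep network the gradient points at which pixels to tweak, which is why a few carefully placed stickers on a stop sign can change what a vision model sees.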